Search for: All records

Creators/Authors contains: "Sinz, Fabian H"

Note: DOI links lead to external sites maintained by the publisher; some full-text articles may remain behind a paywall during the publisher's embargo period.

  1. Deciphering the brain’s structure-function relationship is key to understanding the neuronal mechanisms underlying perception and cognition. The cortical column, a vertical organization of neurons with similar functions, is a classic example of structure-function organization in the primate neocortex. While columns have been identified in primary sensory areas using parametric stimuli, their prevalence across higher-level cortex is debated, particularly for complex tuning in natural image space; a key hurdle in identifying columns is characterizing the complex, nonlinear tuning of neurons to high-dimensional sensory inputs. Building on prior findings of topological organization for features such as color and orientation, we investigated functional clustering in macaque visual area V4 in non-parametric natural image space. We combined large-scale linear probe recordings with deep learning methods to systematically characterize the tuning of >1,200 V4 neurons via in silico synthesis of most exciting images (MEIs), followed by in vivo verification. Single V4 neurons exhibited MEIs containing complex features, including textures, shapes, and even high-level attributes with an eye-like appearance. Neurons recorded on the same silicon probe, inserted orthogonal to the cortical surface, often exhibited similar spatial feature selectivity, suggesting a degree of functional organization along the cortical depth. We quantified MEI similarity using human psychophysics and distances in a contrastive learning-derived embedding space. Moreover, the selectivity of the V4 neuronal population showed evidence of clustering into functional groups with shared feature selectivity. These functional groups showed parallels with the feature maps of units in artificial vision systems, suggesting potentially shared encoding strategies. These results demonstrate the feasibility and scalability of deep learning–based functional characterization of neuronal selectivity in naturalistic visual contexts, offering a framework for quantitatively mapping cortical organization across multiple levels of the visual hierarchy. (A minimal sketch of the MEI-synthesis loop follows this item.)
    Free, publicly-accessible full text available November 5, 2026
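    The MEI synthesis described in item 1 is, at its core, gradient ascent on a trained response model over pixel space. Below is a minimal, hedged sketch of that loop in PyTorch; "model" (a stand-in for a trained digital twin mapping images to predicted spike rates), the image shape, and the norm constraint are placeholder assumptions, not the paper's actual pipeline.

    import torch

    def synthesize_mei(model, neuron_idx, image_shape=(1, 1, 100, 100),
                       steps=1000, lr=0.01, norm_budget=10.0):
        """Gradient-ascend an input image to maximize one model neuron's predicted response."""
        # 'model' is a hypothetical trained digital twin: images -> predicted rates.
        img = torch.randn(image_shape, requires_grad=True)   # start from noise
        optimizer = torch.optim.Adam([img], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            response = model(img)[:, neuron_idx]   # predicted rate of the target neuron
            (-response).sum().backward()           # maximize by minimizing the negative
            optimizer.step()
            with torch.no_grad():                  # project back onto a fixed-norm ball, a common
                norm = img.norm()                  # stand-in for contrast/energy constraints in MEI work
                if norm > norm_budget:
                    img.mul_(norm_budget / norm)
        return img.detach()

    In the experiments, the synthesized image would then be shown back to the animal for in vivo verification; this sketch stops at the optimization step.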
  2. Einhäuser, Wolfgang (Ed.)
    Responses to natural stimuli in area V4—a mid-level area of the visual ventral stream—are well predicted by features from convolutional neural networks (CNNs) trained on image classification. This result has been taken as evidence for a functional role of V4 in object classification. However, we currently do not know whether, and to what extent, V4 plays a role in solving other computational objectives. Here, we investigated normative accounts of V4 (and of V1, for comparison) by predicting macaque single-neuron responses to natural images from the representations extracted by 23 CNNs trained on different computer vision tasks, spanning semantic, geometric, 2D, and 3D task types. We found that V4 was best predicted by semantic classification features and exhibited high task selectivity, whereas the choice of task was less consequential for V1 performance. Consistent with traditional characterizations of V4 function that show its high-dimensional tuning to various 2D and 3D stimulus directions, we found that diverse non-semantic tasks explained aspects of V4 function that are not captured by individual semantic tasks. Nevertheless, jointly considering the features of a pair of semantic classification tasks was sufficient to yield one of our top V4 models, solidifying V4’s main functional role in semantic processing and suggesting that the selectivity to 2D or 3D stimulus properties found by electrophysiologists can result from semantic functional goals. (A sketch of this feature-readout recipe follows this item.)
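    The task-comparison method in item 2 amounts to fitting a regularized linear readout from frozen CNN features to measured responses and scoring each task-trained network by held-out prediction accuracy. A minimal scikit-learn sketch follows; the arrays "features" and "responses" are random placeholders standing in for CNN activations and recorded V4/V1 responses, and ridge regression with per-neuron correlation scoring is an assumed (though standard) choice, not necessarily the paper's exact estimator.

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    features = rng.standard_normal((1000, 512))    # images x CNN feature dims (placeholder)
    responses = rng.standard_normal((1000, 50))    # images x neurons (placeholder)

    X_tr, X_te, y_tr, y_te = train_test_split(features, responses,
                                              test_size=0.2, random_state=0)
    readout = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, y_tr)
    pred = readout.predict(X_te)

    # Per-neuron correlation on held-out images; the mean across neurons gives
    # one score per candidate task/CNN, so tasks can be ranked by this number.
    scores = [np.corrcoef(pred[:, i], y_te[:, i])[0, 1] for i in range(y_te.shape[1])]
    print(f"mean held-out correlation: {np.mean(scores):.3f}")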
  3. Understanding the brain requires understanding neurons’ functional responses to the circuit architecture shaping them. Here we introduce the MICrONS functional connectomics dataset, with dense calcium imaging of around 75,000 neurons in primary visual cortex (VISp) and higher visual areas (VISrl, VISal and VISlm) in an awake mouse viewing natural and synthetic stimuli. These data are co-registered with an electron microscopy reconstruction containing more than 200,000 cells and 0.5 billion synapses. Proofreading of a subset of neurons yielded reconstructions that include complete dendritic trees as well as the local and inter-areal axonal projections that map up to thousands of cell-to-cell connections per neuron. Released as an open-access resource, this dataset includes the tools for data retrieval and analysis1,2. Accompanying studies describe its use for comprehensive characterization of cell types3–6, a synaptic-level connectivity diagram of a cortical column4, and uncovering cell-type-specific inhibitory connectivity that can be linked to gene expression data4,7. Functionally, we identify new computational principles of how information is integrated across visual space8, characterize novel types of neuronal invariances9 and bring structure and function together to uncover a general principle for connectivity between excitatory neurons within and across areas10,11. (A sketch of programmatic access to this resource follows this item.)
    Free, publicly-accessible full text available April 10, 2026
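    The MICrONS release in item 3 is queryable programmatically; the open-source caveclient library is the standard entry point for CAVE-hosted connectomics datastacks. A hedged sketch follows: the datastack and table names below are assumptions about the public release (check the official MICrONS documentation for the current identifiers), and the queries shown are illustrative only.

    from caveclient import CAVEclient

    # Assumed public datastack name for the MICrONS cubic-millimeter volume.
    client = CAVEclient("minnie65_public")

    # Assumed table of detected cell bodies (nuclei) in the EM volume.
    cells = client.materialize.query_table("nucleus_detection_v0")
    print(f"{len(cells)} detected nuclei")

    # Input synapses onto one example neuron, addressed by its segmentation root ID.
    root_id = cells["pt_root_id"].iloc[0]
    synapses = client.materialize.synapse_query(post_ids=root_id)
    print(f"{len(synapses)} input synapses onto root {root_id}")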
  4.